36 research outputs found

    Special issue on smart interactions in cyber-physical systems: Humans, agents, robots, machines, and sensors

    In recent years, there has been increasing interaction between humans and non-human systems as we move beyond the industrial and information ages and into the fourth-generation society. Distinguishing between human and non-human capabilities has become more difficult. Given this, cyber-physical systems (CPSs) are rapidly being integrated with human functionality, and humans have become increasingly dependent on CPSs to perform their daily routines. A future in which human and non-human CPSs consistently interact and help each other navigate a set of non-trivial goals is an interesting and rich area of research, discovery, and practical work. The evidence of convergence has rapidly gained clarity, demonstrating that we can use complex combinations of sensors, artificial intelligence, and data to augment human life and knowledge. To expand knowledge in this area, we should explain how to model, design, validate, implement, and experiment with these complex systems of interaction, communication, and networking, which will be developed and explored in this special issue. This special issue includes ideas of the future that are relevant for understanding, discerning, and developing the relationship between humans and non-human CPSs, as well as the practical nature of systems that facilitate the integration between humans, agents, robots, machines, and sensors (HARMS).

    Fil: Kim, Donghan. Kyung Hee University
    Fil: Rodriguez, Sebastian Alberto. Universidad Tecnológica Nacional; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tucumán; Argentina
    Fil: Matson, Eric T. Purdue University; United States
    Fil: Kim, Gerard Jounghyun. Korea University

    The Effects of Engaging and Affective Behaviors of Virtual Agents in Group Decision-Making

    Virtual agents (VAs) need to exhibit engaged and affective behavior in order to become more effective social actors in our daily lives. However, such behaviors need to conform to social norms, especially in organizational settings. This study examines how different VA behaviors influence subjects' perceptions and actions in group decision-making processes. Participants were exposed to VAs that demonstrated varying levels of engagement and affective behavior during the group discussions. Engagement refers to the VA's focus on the group task, while affective behavior represents the VA's emotional state. The findings indicate that VA engagement positively influences user behavior, particularly attention allocation, but has minimal impact on subjective perception. Conversely, affective expressions of VAs have a negative impact on subjective perceptions such as social presence, social influence, and trustworthiness. Interestingly, of the 64 task discussions, only seven showed a decline in group scores compared to individual scores, and in six of these cases the VA exhibited a non-engaged and affective state. We discuss the results and their potential implications for future research on using VAs in group meetings. The study provides valuable insights for improving VA behavior as a team member in group decision-making scenarios and guides VA design in organizational contexts.

    Comment: Under review. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

    Human-computer interaction: Fundamentals and practice

    Boca Raton, FL. xii, 162 p.: bibl. ref., fig.; 25 cm

    Human-computer interaction : fundamentals and practice

    xiii, 170 p.; 24 cm

    On Next Generation User Interfaces for Computer-Aided Design (CAD) Systems

    This paper presents a review of the different types of user interfaces used in current state-of-the-art commercial and research prototype CAD systems. Two perspectives are taken: one from the interface point of view and the other from the interaction point of view. This paper is not intended to be an exhaustive survey of the subject, but rather an assessment of a current trend. I then discuss and comment on the basic requirements of CAD user interfaces and make a projection on future directions in CAD user interfaces, particularly for mechanical CAD systems.

    Keywords: Computer-Aided Design, User Interface

    1 Introduction: Why "CAD" User Interface? During the past 15 years, great progress has been made in the world of computer-aided design (CAD) systems. Particularly in the domain of three-dimensional geometric CAD systems, compared to the early 80s when the first lines of wire-frame-based modelers were introduced, today's advanced systems feature solid modeling, feature-based de..

    AudienceMR: Extending the Local Space for Large-Scale Audience with Mixed Reality for Enhanced Remote Lecturer Experience

    AudienceMR is designed as a multi-user mixed reality space that seamlessly extends the local user space into a large, shared classroom in which some audience members are seen seated in the real space and more are seen through an extended portal. AudienceMR can provide a sense of the presence of a large-scale crowd/audience with the associated spatial context. In contrast to virtual reality (VR), with mixed reality (MR) a lecturer can deliver content or conduct a performance from a real, comfortable, and familiar local space while interacting directly with nearby real objects such as a desk, podium, educational props, instruments, and office materials. Such a design elicits a realistic user experience closer to that of an actual classroom, which is currently prohibitive owing to the COVID-19 pandemic. This paper validated our hypothesis by conducting a comparative experiment assessing the lecturer's experience with two independent variables: (1) the online classroom platform type, i.e., a 2D desktop video teleconference, a 2D video screen grid in VR, 3D VR, or AudienceMR; and (2) the student depiction, i.e., a 2D upper-body video screen or a 3D full-body avatar. Our experiment validated that AudienceMR elicits a level of anxiety and fear of public speaking closer to that of a real classroom situation, and a higher social and spatial presence than 2D video grid-based solutions and even 3D VR. Compared to 3D VR, AudienceMR offers a more natural and easily usable real-object-based interaction. Most subjects preferred AudienceMR over the alternatives despite the nuisance of having to wear a video see-through headset. Such qualities will result in information conveyance and educational efficacy comparable to those of a real classroom, and better than those achieved through popular 2D desktop teleconferencing or immersive 3D VR solutions.

    AR Enabled IoT for a Smart and Interactive Environment: A Survey and Future Directions

    Accompanying the advent of wireless networking and the Internet of Things (IoT), traditional augmented reality (AR) systems for visualizing virtual 3D models of the real world are evolving into smart and interactive AR tied to the context of physical objects. We propose integrating AR and the IoT in a complementary way, making AR scalable enough to cover objects everywhere with an acceptable level of performance while allowing interaction with the IoT in a more intuitive manner. We identify three key components for realizing such a synergistic integration: (1) distributed and object-centric data management (including for AR services); (2) IoT object-guided tracking; and (3) seamless interaction and content interoperability. We survey the current state of these respective areas and discuss research issues in realizing a future smart and interactive living environment.

    Combining Interactive Exploration and Optimization for Assembly Design

    This paper presents an integrated framework for assembly design. The framework allows the designer to represent knowledge about the design process and constraints, as well as information about the artifact being designed, its design history, and its rationale. Because the complexity of assembly design leads to extremely large design spaces, adequately supporting design space exploration is a key issue that must be addressed. This is achieved in part by allowing the designer to use both top-down and bottom-up approaches to assembly design. Exploration of the design space is further enabled by incorporating a simulated annealing-based optimization tool that allows the designer to rapidly complete partial designs, refine complete designs, and generate multiple design alternatives.

    INTRODUCTION In order to design and optimize a product, designers must be able to consider different alternatives, perform analysis to guide their own design process, and focus in on a "good", if not optimal, design. It i..
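    The abstract above names simulated annealing as its optimization technique but does not spell it out. As a generic, hedged illustration only (the function names, cooling schedule, and parameters below are not from the paper), the core accept/reject loop of simulated annealing can be sketched as:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=1000, seed=0):
    """Generic simulated annealing: always accepts improving moves, and
    accepts worsening moves with probability exp(-delta/t), which lets the
    search escape local minima early on while the temperature t is high."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)       # propose a nearby candidate
        cy = cost(y)
        d = cy - c                 # positive d means the candidate is worse
        if d <= 0 or rng.random() < math.exp(-d / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling               # geometric cooling schedule
    return best, best_c

# Toy usage: minimize (x - 3)^2 with random local steps.
x, c = simulated_annealing(
    cost=lambda v: (v - 3.0) ** 2,
    neighbor=lambda v, rng: v + rng.uniform(-0.5, 0.5),
    x0=0.0,
)
```

    In an assembly-design setting, `x` would be a (possibly partial) assembly configuration and `neighbor` a move that perturbs component placement or selection; the loop structure stays the same.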